1.
Sci Data; 11(1): 416, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653806

ABSTRACT

Our sense of hearing is mediated by cochlear hair cells, of which there are two types, organized in one row of inner hair cells and three rows of outer hair cells. Each cochlea contains 5,000-15,000 terminally differentiated hair cells, and their survival is essential for hearing because they do not regenerate after insult. It is often desirable in hearing research to quantify the number of hair cells within cochlear samples, both in pathological conditions and in response to treatment. Machine learning can automate the quantification process but requires a vast and diverse dataset for effective training. In this study, we present a large collection of annotated cochlear hair-cell datasets, labeled with commonly used hair-cell markers and imaged using various fluorescence microscopy techniques. The collection includes samples from mouse, rat, guinea pig, pig, primate, and human cochlear tissue, under normal conditions and following in-vivo and in-vitro ototoxic drug application. The dataset includes over 107,000 hair cells, each identified and annotated as either an inner or an outer hair cell. This dataset is the result of a collaborative effort across multiple laboratories and has been carefully curated to represent a variety of imaging techniques. With suggested usage parameters and a well-described annotation procedure, this collection can facilitate the development of generalizable cochlear hair-cell detection models or serve as a starting point for fine-tuning models for other analysis tasks. By providing this dataset, we aim to give other hearing research groups the opportunity to develop their own tools with which to analyze cochlear imaging data more fully, accurately, and with greater ease.


Subject(s)
Cochlea, Animals, Mice, Guinea Pigs, Humans, Rats, Swine, Hair Cells, Auditory, Microscopy, Fluorescence, Machine Learning
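The abstract above describes a dataset in which every hair cell carries an inner/outer annotation. A minimal sketch of how such annotations could be tallied per cell type is shown below; the record format (a list of dicts with a "cell_type" field of "IHC" or "OHC") is an assumption for illustration, not the dataset's actual schema.

```python
from collections import Counter

def count_hair_cells(annotations):
    """Tally annotation records by their cell type (e.g. 'IHC' vs 'OHC')."""
    return Counter(rec["cell_type"] for rec in annotations)

# Hypothetical annotation records; the real dataset's fields may differ.
annotations = [
    {"cell_type": "IHC"},
    {"cell_type": "OHC"},
    {"cell_type": "OHC"},
    {"cell_type": "OHC"},
]

counts = count_hair_cells(annotations)
# counts["OHC"] / counts["IHC"] reflects the expected ~3:1 row arrangement
```

The same tally, applied per image or per cochlear region, is the basic quantity that automated counting models are trained to reproduce.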
2.
bioRxiv; 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37693382

ABSTRACT

Our sense of hearing is mediated by cochlear hair cells, localized within the sensory epithelium called the organ of Corti. There are two types of hair cells in the cochlea, organized in one row of inner hair cells and three rows of outer hair cells. Each cochlea contains a few thousand hair cells, and their survival is essential for our perception of sound because they are terminally differentiated and do not regenerate after insult. It is often desirable in hearing research to quantify the number of hair cells within cochlear samples, both in pathological conditions and in response to treatment. However, the sheer number of cells along the cochlea makes manual quantification impractical. Machine learning can overcome this challenge by automating the quantification process but requires a vast and diverse dataset for effective training. In this study, we present a large collection of annotated cochlear hair-cell datasets, labeled with commonly used hair-cell markers and imaged using various fluorescence microscopy techniques. The collection includes samples from mouse, human, pig, and guinea pig cochlear tissue, under normal conditions and following in-vivo and in-vitro ototoxic drug application. The dataset includes over 90,000 hair cells, all of which have been manually identified and annotated as one of two cell types: inner hair cells and outer hair cells. This dataset is the result of a collaborative effort across multiple laboratories and has been carefully curated to represent a variety of imaging techniques. With suggested usage parameters and a well-described annotation procedure, this collection can facilitate the development of generalizable cochlear hair-cell detection models or serve as a starting point for fine-tuning models for other analysis tasks.
By providing this dataset, we aim to give other groups in the hearing research community the opportunity to develop their own tools with which to analyze cochlear imaging data more fully, accurately, and with greater ease.

3.
PLoS Biol; 21(3): e3002041, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36947567

ABSTRACT

Our sense of hearing is mediated by sensory hair cells, precisely arranged and highly specialized cells subdivided into outer hair cells (OHCs) and inner hair cells (IHCs). Light microscopy tools allow for imaging of auditory hair cells along the full length of the cochlea, often yielding more data than is feasible to analyze manually. Currently, there are no widely applicable tools for fast, unsupervised, unbiased, and comprehensive image analysis of auditory hair cells that work well with imaging datasets containing either an entire cochlea or smaller sampled regions. Here, we present a highly accurate machine learning-based hair cell analysis toolbox (HCAT) for the comprehensive analysis of whole cochleae (or smaller regions of interest) across light microscopy imaging modalities and species. HCAT is software that automates common image analysis tasks such as counting hair cells, classifying them by subtype (IHCs versus OHCs), determining their best frequency based on their location along the cochlea, and generating cochleograms. These automated tools remove a considerable barrier in cochlear image analysis, allowing for faster, unbiased, and more comprehensive data analysis practices. Furthermore, HCAT can serve as a template for deep learning-based detection tasks in other types of biological tissue: with some training data, HCAT's core codebase can be trained to develop a custom deep learning detection model for any object in an image.


Subject(s)
Cochlea, Hair Cells, Vestibular, Hair Cells, Auditory, Inner/metabolism, Hair Cells, Auditory, Outer/metabolism, Hearing
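The abstract above mentions assigning each hair cell a "best frequency" from its position along the cochlea. The standard frequency-place relationship for this is Greenwood's function, f = A(10^(ax) - k), where x is the fractional distance from the apex. The sketch below uses Greenwood's published constants for the human cochlea; whether HCAT uses exactly these parameters is an assumption, and other species require different constants.

```python
def greenwood_frequency_hz(x_from_apex: float,
                           A: float = 165.4,
                           a: float = 2.1,
                           k: float = 0.88) -> float:
    """Best frequency (Hz) at fractional distance x along the cochlea.

    x = 0 is the apex (low frequencies), x = 1 is the base (high
    frequencies). Default constants are Greenwood's fit for the human
    cochlea; other species need species-specific parameters.
    """
    if not 0.0 <= x_from_apex <= 1.0:
        raise ValueError("position must be a fraction between 0 and 1")
    return A * (10 ** (a * x_from_apex) - k)

f_apex = greenwood_frequency_hz(0.0)  # ~20 Hz at the human apex
f_base = greenwood_frequency_hz(1.0)  # ~20 kHz at the human base
```

Mapping each detected cell's arc-length position to a fraction of total cochlear length and passing it through this function is what turns a cell count into a cochleogram (counts per frequency band).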